Glendale


Elon Musk, AI and the antichrist: the biggest tech stories of 2025

The Guardian

Elon Musk receives a golden key from Donald Trump in the Oval Office at the White House in Washington DC on 30 May 2025. Today, we are looking back at the biggest stories in tech of 2025: Elon Musk's political rise, burst, and fall; artificial intelligence's subsumption of the global economy, all other technology, and even the Earth's topography; Australia's remarkable social media ban; the tech industry's new Trumpian politics; and, as a treat, a glimpse of the apocalypse offered by one of Silicon Valley's savviest and strangest billionaires. Tesla CEO Elon Musk attends a memorial service for slain far-right commentator Charlie Kirk at State Farm Stadium in Glendale, Arizona, on 21 September 2025.


What will be Tyler Robinson's defense strategy? Experts weigh in on accused Charlie Kirk assassin

FOX News

Legal experts analyze the challenging defense strategy for Tyler Robinson, who allegedly shot Charlie Kirk at Utah Valley University, as prosecutors prepare evidence for trial.


Lawmakers call to remove Charlie Kirk assassination videos

FOX News

Lawmakers pressure social media companies to remove Charlie Kirk assassination footage as platforms struggle with content moderation of graphic violence.


The TESS Ten Thousand Catalog: 10,001 uniformly-vetted and -validated Eclipsing Binary Stars detected in Full-Frame Image data by machine learning and analyzed by citizen scientists

Kostov, Veselin B., Powell, Brian P., Fornear, Aline U., Di Fraia, Marco Z., Gagliano, Robert, Jacobs, Thomas L., de Lambilly, Julien S., Durantini Luca, Hugo A., Majewski, Steven R., Omohundro, Mark, Orosz, Jerome, Rappaport, Saul A., Salik, Ryan, Short, Donald, Welsh, William, Alexandrov, Svetoslav, da Silva, Cledison Marcos, Dunning, Erika, Guhne, Gerd, Huten, Marc, Hyogo, Michiharu, Iannone, Davide, Lee, Sam, Magliano, Christian, Sharma, Manya, Tarr, Allan, Yablonsky, John, Acharya, Sovan, Adams, Fred, Barclay, Thomas, Montet, Benjamin T., Mullally, Susan, Olmschenk, Greg, Prsa, Andrej, Quintana, Elisa, Wilson, Robert, Balcioglu, Hasret, Kruse, Ethan, and the Eclipsing Binary Patrol Collaboration

arXiv.org Artificial Intelligence

The Transiting Exoplanet Survey Satellite (TESS) has surveyed nearly the entire sky in Full-Frame Image mode with a time resolution of 200 seconds to 30 minutes and a temporal baseline of at least 27 days. In addition to its primary goal of discovering new exoplanets, TESS is exceptionally capable of detecting variable stars, and in particular short-period eclipsing binaries, which are relatively common, making up a few percent of all stars, and represent powerful astrophysical laboratories for deep investigations of stellar formation and evolution. We combed Sectors 1-82 of TESS Full-Frame Image data searching for eclipsing binary stars using a neural network that identified ~1.2 million stars with eclipse-like features. Of these, we performed an in-depth analysis on ~60,000 targets using automated methods and manual inspection by citizen scientists. Here we present a catalog of 10,001 uniformly vetted and validated eclipsing binary stars that passed all our ephemeris and photocenter tests, as well as complementary visual inspection. Of these, 7,936 are new eclipsing binaries, while the remaining 2,065 are known systems for which we update the published ephemerides. We outline the detection and analysis of the targets, discuss the properties of the sample, and highlight potentially interesting systems. Finally, we also provide a list of ~900,000 unvetted and unvalidated targets for which the neural network found eclipse-like features with a score higher than 0.9, and for which there are no known eclipsing binaries within a sky-projected separation of one TESS pixel (~21 arcsec).
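
A minimal sketch, not the authors' pipeline, of the final filter the abstract describes: keep neural-network candidates with a score above 0.9 that have no known eclipsing binary within one TESS pixel (~21 arcsec) on the sky. The file names and column names (ra, dec, nn_score) are hypothetical placeholders.

import pandas as pd
from astropy.coordinates import SkyCoord
import astropy.units as u

candidates = pd.read_csv("nn_candidates.csv")   # hypothetical candidate list
known_ebs = pd.read_csv("known_ebs.csv")        # hypothetical known-EB catalog

# Step 1: score cut at the threshold quoted in the abstract.
strong = candidates[candidates["nn_score"] > 0.9].copy()

# Step 2: sky cross-match; drop anything within one TESS pixel of a known EB.
cand_coords = SkyCoord(ra=strong["ra"].values * u.deg,
                       dec=strong["dec"].values * u.deg)
known_coords = SkyCoord(ra=known_ebs["ra"].values * u.deg,
                        dec=known_ebs["dec"].values * u.deg)
_, sep2d, _ = cand_coords.match_to_catalog_sky(known_coords)
unmatched = strong[sep2d > 21 * u.arcsec]

print(f"{len(unmatched)} candidates with no known EB within 21 arcsec")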


A New Way to Fix the Housing Crisis

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Two decades ago, the fire marshal in Glendale, Arizona, was concerned that the elevators in a new stadium wouldn't be large enough to accommodate a 7-foot stretcher held flat. Tilting a stretcher to make it fit in the cab, the marshal worried, might jeopardize the treatment of a patient with a back injury. Maybe our elevators should be bigger, he thought. The marshal put this idea to the International Code Council, the organization that governs the construction of American buildings. After minor feedback and minimal research (the marshal measured three stretchers in the Phoenix area), the suggestion was incorporated into the ICC's model code.


Identifying Cyberbullying Roles in Social Media

Sandoval, Manuel, Abuhamad, Mohammed, Furman, Patrick, Nazari, Mujtaba, Hall, Deborah L., Silva, Yasin N.

arXiv.org Artificial Intelligence

Social media has revolutionized communication, allowing people worldwide to connect and interact instantly. However, it has also led to increases in cyberbullying, which poses a significant threat to children and adolescents globally, affecting their mental health and well-being. It is critical to accurately detect the roles of individuals involved in cyberbullying incidents to effectively address the issue on a large scale. This study explores the use of machine learning models to detect the roles involved in cyberbullying interactions. After examining the AMiCA dataset and addressing class imbalance issues, we evaluate the performance of various models built with four underlying LLMs (i.e., BERT, RoBERTa, T5, and GPT-2) for role detection. Our analysis shows that oversampling techniques help improve model performance. The best model, a fine-tuned RoBERTa using oversampled data, achieved an overall F1 score of 83.5%, increasing to 89.3% after applying a prediction threshold. The top-2 F1 score without thresholding was 95.7%. Our method outperforms previously proposed models. After investigating the per-class model performance and confidence scores, we show that the models perform well in classes with more samples and less contextual confusion (e.g., Bystander Other), but struggle with classes with fewer samples (e.g., Bystander Assistant) and more contextual ambiguity (e.g., Harasser and Victim). This work highlights current strengths and limitations in the development of accurate models with limited data and complex scenarios.
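
A minimal sketch of two ideas from the abstract, not the authors' exact code: random oversampling to balance role classes, and a prediction threshold that abstains on low-confidence outputs (the mechanism behind the jump from 83.5% to 89.3% F1). The toy data, label set, and threshold value are illustrative assumptions.

import numpy as np
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import f1_score

texts = np.array(["msg a", "msg b", "msg c", "msg d"])  # toy posts
roles = np.array([0, 0, 0, 1])  # e.g. 0 = Bystander Other, 1 = Harasser

# (1) Oversample minority roles so each class is equally represented.
ros = RandomOverSampler(random_state=42)
idx, roles_bal = ros.fit_resample(np.arange(len(texts)).reshape(-1, 1), roles)
texts_bal = texts[idx.ravel()]
print(Counter(roles_bal))  # balanced class counts

# (2) After fine-tuning, keep a prediction only when the top softmax
# probability clears a threshold; score F1 on the confident subset.
probs = np.array([[0.92, 0.08], [0.55, 0.45], [0.10, 0.90]])  # toy outputs
labels = np.array([0, 1, 1])
tau = 0.8  # hypothetical threshold
confident = probs.max(axis=1) >= tau
preds = probs.argmax(axis=1)
print("F1 on confident subset:",
      f1_score(labels[confident], preds[confident], average="macro"))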


Evaluating LLMs Capabilities Towards Understanding Social Dynamics

Tahir, Anique, Cheng, Lu, Sandoval, Manuel, Silva, Yasin N., Hall, Deborah L., Liu, Huan

arXiv.org Artificial Intelligence

Social media discourse involves people from different backgrounds, beliefs, and motives. Thus, such discourse can often devolve into toxic interactions. Generative models, such as Llama and ChatGPT, have recently exploded in popularity due to their capabilities in zero-shot question-answering. Because these models are increasingly being used to ask questions of social significance, a crucial research question is whether they can understand social media dynamics. This work provides a critical analysis of generative LLMs' ability to understand language and dynamics in social contexts, particularly considering cyberbullying and anti-cyberbullying (posts aimed at reducing cyberbullying) interactions. Specifically, we compare and contrast the capabilities of different large language models (LLMs) to understand three key aspects of social dynamics: language, directionality, and the occurrence of bullying/anti-bullying messages. We found that while fine-tuned LLMs exhibit promising results in some social media understanding tasks (understanding directionality), they present mixed results in others (proper paraphrasing and bullying/anti-bullying detection). We also found that fine-tuning and prompt-engineering mechanisms can have positive effects on some tasks. We believe that an understanding of LLMs' capabilities is crucial to designing future models that can be effectively used in social applications.
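
A minimal sketch of the kind of zero-shot probes the abstract describes, not the authors' prompts: one template per evaluated aspect (paraphrasing, directionality, bullying/anti-bullying detection). The template wording and the ask_llm helper are illustrative assumptions standing in for whichever chat model is under evaluation.

PROBES = {
    "paraphrase": "Paraphrase this social media post: {post}",
    "directionality": ("In this exchange, who is the message directed at?\n"
                       "{post}\nAnswer with the target user's handle."),
    "detection": ("Is the following post bullying, anti-bullying (a defense "
                  "of the victim), or neither?\n{post}"),
}

def build_probes(post: str) -> dict:
    """Fill each probe template with the post under evaluation."""
    return {task: tmpl.format(post=post) for task, tmpl in PROBES.items()}

for task, prompt in build_probes("@user2 leave them alone, this isn't ok").items():
    print(f"--- {task} ---\n{prompt}\n")
    # response = ask_llm(prompt)  # hypothetical model call; compare to labels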


Detecting LGBTQ+ Instances of Cyberbullying

Arslan, Muhammad, Madrigal, Manuel Sandoval, Abuhamad, Mohammed, Hall, Deborah L., Silva, Yasin N.

arXiv.org Artificial Intelligence

Social media continues to have an impact on the trajectory of humanity. However, its introduction has also weaponized keyboards, allowing the abusive language normally reserved for in-person bullying to jump onto the screen, i.e., cyberbullying. Cyberbullying poses a significant threat to adolescents globally, affecting the mental health and well-being of many. A group that is particularly at risk is the LGBTQ+ community, as researchers have uncovered a strong correlation between identifying as LGBTQ+ and suffering from greater online harassment. Therefore, it is critical to develop machine learning models that can accurately discern cyberbullying incidents as they happen to LGBTQ+ members. The aim of this study is to compare the efficacy of several transformer models in identifying cyberbullying targeting LGBTQ+ individuals. We seek to determine the relative merits and demerits of these existing methods in addressing complex and subtle kinds of cyberbullying by assessing their effectiveness with real social media data.
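
A minimal sketch of the comparison setup this abstract implies, not the authors' code: fine-tune several transformer checkpoints under identical settings and compare them on the same held-out split. The checkpoint list and the binary label scheme are assumptions for illustration.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = ["bert-base-uncased", "roberta-base", "distilbert-base-uncased"]

for ckpt in CHECKPOINTS:
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(
        ckpt, num_labels=2)  # binary: cyberbullying vs. not
    # ... tokenize the labelled LGBTQ+ posts, fine-tune each model with
    # identical hyperparameters, then score macro-F1 on one held-out split ...
    print(ckpt, model.num_parameters())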


Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks

Yue, Shengbin, Wang, Siyuan, Chen, Wei, Huang, Xuanjing, Wei, Zhongyu

arXiv.org Artificial Intelligence

Recent advancements in Large Language Models (LLMs) have led to significant breakthroughs in various natural language processing tasks. However, generating factually consistent responses in knowledge-intensive scenarios remains a challenge due to issues such as hallucination, difficulty in acquiring long-tailed knowledge, and limited memory expansion. This paper introduces SMART, a novel multi-agent framework that leverages external knowledge to enhance the interpretability and factual consistency of LLM-generated responses. SMART comprises four specialized agents, each performing a specific sub-trajectory action to navigate complex knowledge-intensive tasks. We propose a multi-agent co-training paradigm, Long- and Short-Trajectory Learning, which ensures synergistic collaboration among agents while maintaining fine-grained execution by each agent. Extensive experiments on 5 tasks demonstrate SMART's superior performance compared to previous widely adopted methods.
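
A minimal sketch of the framework's shape, not the SMART implementation: four specialized agents, each performing one sub-trajectory action, chained so that each consumes the state the previous one produced. The agent roles and the call_llm helper are illustrative assumptions, not the paper's actual agents.

from dataclasses import dataclass, field

@dataclass
class Trajectory:
    question: str
    steps: list = field(default_factory=list)  # (agent name, output) pairs

def call_llm(prompt: str) -> str:  # hypothetical LLM call
    return f"<response to: {prompt[:40]}...>"

def make_agent(name: str, instruction: str):
    """Each agent performs one sub-trajectory action on the shared state."""
    def agent(traj: Trajectory) -> Trajectory:
        context = "\n".join(out for _, out in traj.steps)
        out = call_llm(f"{instruction}\nQuestion: {traj.question}\n{context}")
        traj.steps.append((name, out))
        return traj
    return agent

PIPELINE = [
    make_agent("planner", "Decompose the question into knowledge needs."),
    make_agent("retriever", "Fetch external evidence for each need."),
    make_agent("verifier", "Check the evidence for factual consistency."),
    make_agent("responder", "Compose the final grounded answer."),
]

traj = Trajectory("Who discovered the first eclipsing binary?")
for agent in PIPELINE:
    traj = agent(traj)
print(traj.steps[-1][1])  # final agent's answer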


ReMI: A Dataset for Reasoning with Multiple Images

Kazemi, Mehran, Dikkala, Nishanth, Anand, Ankit, Devic, Petar, Dasgupta, Ishita, Liu, Fangyu, Fatemi, Bahare, Awasthi, Pranjal, Guo, Dee, Gollapudi, Sreenivas, Qureshi, Ahmed

arXiv.org Artificial Intelligence

With the continuous advancement of large language models (LLMs), it is essential to create new benchmarks to effectively evaluate their expanding capabilities and identify areas for improvement. This work focuses on multi-image reasoning, an emerging capability in state-of-the-art LLMs. We introduce ReMI, a dataset designed to assess LLMs' ability to Reason with Multiple Images. This dataset encompasses a diverse range of tasks, spanning various reasoning domains such as math, physics, logic, code, table/chart understanding, and spatial and temporal reasoning. It also covers a broad spectrum of characteristics found in multi-image reasoning scenarios. We have benchmarked several cutting-edge LLMs using ReMI and found a substantial gap between their performance and human-level proficiency. This highlights the challenges in multi-image reasoning and the need for further research. Our analysis also reveals the strengths and weaknesses of different models, shedding light on the types of reasoning that are currently attainable and areas where future models require improvement. To foster further research in this area, we are releasing ReMI publicly: https://huggingface.co/datasets/mehrankazemi/ReMI.
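
A minimal sketch for pulling the released benchmark from the URL above via the datasets library; the split and field names are not stated in the abstract, so inspect the loaded object before relying on them.

from datasets import load_dataset

remi = load_dataset("mehrankazemi/ReMI")
print(remi)                          # available splits and features
example = remi[next(iter(remi))][0]  # first example of the first split
print(example.keys())                # e.g. task name, images, question, answer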